
    Optimal Clustering under Uncertainty

    Classical clustering algorithms typically either lack an underlying probability framework that would make them predictive or focus on parameter estimation rather than on defining and minimizing a notion of error. Recent work addresses these issues by developing a probabilistic framework based on the theory of random labeled point processes and characterizing a Bayes clusterer that minimizes the number of misclustered points. The Bayes clusterer is analogous to the Bayes classifier: whereas determining a Bayes classifier requires full knowledge of the feature-label distribution, deriving a Bayes clusterer requires full knowledge of the point process. When uncertain of the point process, one would like to find a robust clusterer that is optimal over the uncertainty, just as one may find optimal robust classifiers under uncertain feature-label distributions. Herein, we derive an optimal robust clusterer by first finding an effective random point process that incorporates all randomness within its own probabilistic structure and from which a Bayes clusterer can be derived, providing an optimal robust clusterer relative to the uncertainty. This is analogous to the use of effective class-conditional distributions in robust classification. After evaluating the performance of robust clusterers on synthetic Gaussian mixture models, we apply the framework to granular imaging, where we use asymptotic granulometric moment theory for granular images to relate robust clustering theory to the application.
    Comment: 19 pages, 5 EPS figures, 1 table
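    A minimal sketch of the construction described above, assuming the uncertainty in the point process is parameterized by \theta with prior \pi(\theta); the symbols \Lambda_\theta, \psi, and \varepsilon are our own shorthand, not the paper's notation. The effective random labeled point process is the prior mixture of the uncertain processes, and the robust clusterer is the Bayes clusterer of that effective process:
    \[
      \Lambda_{\mathrm{eff}} \;=\; \mathbb{E}_{\pi(\theta)}\!\left[\Lambda_\theta\right],
      \qquad
      \psi_{\mathrm{robust}} \;=\; \arg\min_{\psi}\; \mathbb{E}_{\Lambda_{\mathrm{eff}}}\!\left[\varepsilon(\psi)\right],
    \]
    where \varepsilon(\psi) denotes the number of points misclustered by a clusterer \psi. This mirrors how robust classification averages the uncertain class-conditional distributions into effective class-conditional distributions and then takes the Bayes classifier of the effective model.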

    Prospectus, September 24, 1986


    Prevalent, protective, and convergent IgG recognition of SARS-CoV-2 non-RBD spike epitopes

    The molecular composition and binding epitopes of the immunoglobulin G (IgG) antibodies that circulate in blood plasma following SARS-CoV-2 infection are unknown. Proteomic deconvolution of the IgG repertoire against the spike glycoprotein in convalescent subjects revealed that the response is directed predominantly (>80%) against epitopes residing outside the receptor-binding domain (RBD). In one subject, just four IgG lineages accounted for 93.5% of the response, including an N-terminal domain (NTD)-directed antibody that was protective against lethal viral challenge. Genetic, structural, and functional characterization of a multi-donor class of “public” antibodies revealed an NTD epitope that is recurrently mutated among emerging SARS-CoV-2 variants of concern. These data show that “public” NTD-directed and other non-RBD plasma antibodies are prevalent and have implications for SARS-CoV-2 protection and antibody escape.

    Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure

    Background: Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable, since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms.
    Results: The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and leukemia data indicates that they also recover many of the genes and pathways linked to the cancers under study.
    Conclusions: Bayesian feature selection is a promising framework for small-sample, high-dimensional data, in particular for biomarker discovery applications. When applied to cancer data, these algorithms output many genes already shown to be involved in cancer, as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, which is particularly useful for studying gene interactions and gene networks.
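    To make the computational barrier described above concrete, here is a schematic form of the feature-selection posterior that an exact algorithm would have to maximize; the symbols F (candidate feature set), B (covariance block structure and feature-block assignment), \theta (model parameters), and D (data) are our own shorthand, not the paper's notation:
    \[
      \pi^{*}(F \mid D) \;\propto\; \pi(F)\,\sum_{B}\pi(B)\int p(D \mid F, B, \theta)\,\pi(\theta \mid B)\,d\theta .
    \]
    Under the Gaussian model with block covariance structures the inner integral is analytically available, but the outer sum ranges over all block structures and feature-block assignments and grows combinatorially with the number of features, which is why exact maximization is intractable and suboptimal heuristics such as 2MNC-Robust and the three algorithms proposed here are used instead.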